
Forward merge branch-24.06 into branch-24.08 #4489

Merged
merged 20 commits into from
Jun 19, 2024

Conversation

@nv-rliu (Contributor) commented Jun 14, 2024

Replaces #4476

nv-rliu and others added 8 commits June 7, 2024 11:30
…#4475)

Addresses rapidsai#4474 

Currently `openmpi=5.0.3-hfd7b305_105` is blocking our CI `cpp_build`
job.

Most likely introduced by this PR:
conda-forge/openmpi-feedstock#158

This PR will unblock cugraph development until the issues are fixed.
Once that happens, the version pinning should be removed.
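A pin of this kind is typically a single conda match-spec line in the CI dependency files. The following is illustrative only; the exact bound and file used by the PR are assumptions, not the actual change:

```
# conda match spec (illustrative; the PR's actual pin may differ)
- openmpi<5.0.3  # remove once conda-forge/openmpi-feedstock#158 is resolved
```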
…sai#4464)

PyTorch 2.2+ is incompatible with the NCCL version on our containers.
Normally, this would not be an issue, but there is a bug in CuPy that
loads the system NCCL instead of the user-installed NCCL. This PR pins the
PyTorch test dependency version to work around the issue.

---------

Co-authored-by: Bradley Dice <[email protected]>
Co-authored-by: Ralph Liu <[email protected]>
Co-authored-by: James Lamb <[email protected]>
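A test-dependency pin like the one described would look roughly like the line below. The exact bound is an assumption (the message only says 2.2+ is incompatible), not the PR's literal change:

```
# test-dependency constraint (illustrative; exact bound is an assumption)
- pytorch>=2.0,<2.2  # 2.2+ hits the CuPy bug that loads the system NCCL
```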
@alexbarghi-nv (Member) left a comment:

👍

@alexbarghi-nv alexbarghi-nv added feature request New feature or request non-breaking Non-breaking change labels Jun 14, 2024
@alexbarghi-nv (Member) commented:
@nv-rliu I think this should be renamed "Forward merge branch-24.06 into branch-24.08"

@tingyu66 tingyu66 changed the title Forward merge branch-24.08 into branch-24.06 Forward merge branch-24.06 into branch-24.08 Jun 14, 2024
@bdice bdice mentioned this pull request Jun 17, 2024
Comment on lines 60 to 61
PARALLEL_LEVEL=$(python -c \
"from math import ceil; from multiprocessing import cpu_count; print(ceil(cpu_count()/2))")
A member commented:

We recently added a PARALLEL_LEVEL environment variable to the rapids-configure-sccache script, which is sourced earlier in this file.

Therefore you could simply use the value defined by that script instead of redefining it here.

A member replied:

Great. Just wanted to be aggressive here to rule out the impact of hyperthreading. Using all 64 logical cores to compile cugraph did fail at times on my workstation (with a Threadripper 3975WX).

A contributor replied:

I don't think hyperthreading is a factor on these CI machines -- we get the "real" number of cores, afaik. We should be safe to remove this. Let's wait to push until after CI runs on the current commit (I'm testing something else at the moment).
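The simplification suggested above can be sketched as a guarded fallback. This is a sketch under the stated assumption that rapids-configure-sccache (sourced earlier in the build script) exports PARALLEL_LEVEL; `python3` is used here for portability, where the original snippet called `python`:

```shell
# Sketch: reuse PARALLEL_LEVEL when rapids-configure-sccache (sourced
# earlier in the script) has already exported it; otherwise fall back to
# half the logical cores, as the original inline snippet did.
PARALLEL_LEVEL="${PARALLEL_LEVEL:-$(python3 -c \
  "from math import ceil; from multiprocessing import cpu_count; print(ceil(cpu_count()/2))")}"
export PARALLEL_LEVEL
echo "PARALLEL_LEVEL=${PARALLEL_LEVEL}"
```

With the `:-` default, the inline computation only runs when the variable is unset or empty, so the eventual removal discussed above reduces to deleting the fallback expression.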

ci/build_wheel.sh (outdated; resolved)
@tingyu66 tingyu66 marked this pull request as draft June 18, 2024 12:55
@bdice bdice marked this pull request as ready for review June 18, 2024 23:06
@bdice (Contributor) commented Jun 19, 2024

I'm going to go ahead and trigger a merge. All builds succeeded.

@bdice (Contributor) commented Jun 19, 2024

/merge

@rapids-bot rapids-bot bot merged commit f519ac1 into rapidsai:branch-24.08 Jun 19, 2024
131 checks passed
@nv-rliu nv-rliu deleted the branch-24.08-merge-24.06 branch June 24, 2024 13:46
Labels: ci, conda, feature request (New feature or request), non-breaking (Non-breaking change), python
Projects: None yet

7 participants